The Robustness of Lp, p < 1
Abstract
In robust statistics, the breakdown point of an estimator is the percentage of outliers under which the estimator still generates a reliable estimate. The upper bound on the breakdown point is 50%, which means it is not possible to generate a reliable estimate when more than half of the observations are outliers [1-2]. In this paper, it is shown that in the majority of experiments, when the outliers exceed 50% but are distributed randomly enough, it is still possible to generate a reliable estimate from the minority of good observations. The phenomenon of a breakdown point larger than 50% is named super robustness. Further, a robust estimator is called strictly robust if it generates a perfect estimate whenever all the good observations are perfect. More specifically, the super robustness of the maximum likelihood estimator of the exponential power distribution, or Lp estimation with p < 1, is investigated. This paper first proves that Lp (p < 1) is a strictly robust location estimator. It then proves that Lp is strictly super robust under translation, rotation, and scaling transformations, and robust under Euclidean transformations.

1. Parameter Estimation Problem

A system that transforms an input I to an output O with a transformation T is defined mathematically as:

O = T(I)    (1)

Estimating the transformation T from a group of input-output pairs of a system is a central and challenging problem in many pattern matching and computer vision systems. Typical examples are medical image registration, fingerprint matching, and camera model estimation. The following concepts will be used in this paper:

Estimator: An estimation approach that generates the system parameters from groups of observations.

Experiment: A group of observations that is used to generate an estimated transformation.
Robustness: The characteristic of an estimator that the error of the estimated transformation relative to the ideal (best) estimate remains bounded even when all the noise observations move to infinity.

Strict robustness: The capability of an estimator to give the perfect estimate when the good observations are perfect, regardless of how the noise observations are distributed.

Super robustness: The characteristic that an estimator generates an estimated transformation whose error relative to the perfect estimate remains bounded even when the noise observations are the majority and move to infinity.

Strict super robustness: The characteristic that an estimator generates the perfect estimate when the good observations are perfect and the noise observations are the majority.

Breakdown point: The percentage of noise that a robust estimator tolerates. Here "tolerates" means that no matter how the noise observations are distributed, the estimated result still has bounded error relative to the ideal estimate. For robust estimators, the upper bound on the breakdown point is 50%.

Although no estimator can tolerate a majority of noise observations under every possible distribution, an estimator can tolerate a majority of noise observations under special distributions, and even under the majority of distributions. This paper investigates the robustness and super-robustness of Lp (p < 1) under translation, Euclidean transformation, and scaling transformation.

Suppose I_1, I_2, ..., I_N are N inputs of the system defined in formula (1), where each I_i is a point in a Euclidean space, and O_1, O_2, ..., O_N are the corresponding outputs, where each O_i is a point in a Euclidean space that may have a different dimension than the input space.
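To make the 50% upper bound on the breakdown point concrete, the following sketch (not from the paper; the helper names and the numeric values are illustrative) contrasts the sample mean, whose breakdown point is 0%, with the median, which tolerates up to, but not including, 50% adversarially placed outliers:

```python
import statistics

def contaminated_sample(n_good, n_bad, good_value=10.0, outlier=1e9):
    """Good observations clustered at good_value; outliers piled at a huge value."""
    return [good_value] * n_good + [outlier] * n_bad

# A single outlier already ruins the mean (breakdown point 0%).
sample = contaminated_sample(n_good=9, n_bad=1)
print(statistics.mean(sample))    # far from 10.0

# The median tolerates up to (but not including) 50% outliers...
sample = contaminated_sample(n_good=6, n_bad=4)
print(statistics.median(sample))  # 10.0

# ...but breaks once the outliers are the majority and all pile up on
# the same side: this is the 50% upper bound on the breakdown point.
sample = contaminated_sample(n_good=4, n_bad=6)
print(statistics.median(sample))  # 1e9
```

The paper's point is that this worst case requires the outliers to conspire; when they are scattered randomly, a majority of outliers need not be fatal.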
For a transformation T, we define the difference between O_i and T(I_i) as

d(O_i, T(I_i))    (2)

Thus, the overall difference between the observed outputs and the outputs estimated with T is

D(T) = Σ_{i=1}^{N} d(O_i, T(I_i))    (3)

The problem of estimating T becomes finding a T_b that satisfies

D_b = min_T Σ_{i=1}^{N} d(O_i, T(I_i))    (4)

where the minimum is taken over all transformations T in a predefined transformation group. The transformation groups discussed in this paper are translation, scaling transformation, and Euclidean transformation. When d is the squared Euclidean distance, this is least squares estimation. In this paper, we use Lp (p < 1) to define the difference; that is, the difference between O_i and T(I_i) is

||O_i - T(I_i)||^p    (5)

Thus, the estimation problem becomes finding a T_b that satisfies

D_b = min_T Σ_{i=1}^{N} ||O_i - T(I_i)||^p    (6)
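A minimal sketch of the Lp (p < 1) objective for the simplest transformation group, one-dimensional translation, follows. The function names are illustrative, and it exploits the fact that for p < 1 each term |o - t|^p is concave between adjacent observations, so the global minimum of the cost is attained at one of the data points and a scan over the observations suffices:

```python
def lp_objective(t, observations, p):
    """Sum of |o - t|^p: the Lp (p < 1) cost of candidate translation t."""
    return sum(abs(o - t) ** p for o in observations)

def lp_location(observations, p=0.5):
    """Lp location estimate for p < 1.

    Each term |o - t|^p is concave on the interval between adjacent
    observations, so the minimum over each interval sits at an endpoint;
    evaluating the cost at every observation finds the global minimizer.
    """
    return min(observations, key=lambda t: lp_objective(t, observations, p))

# Four perfect good observations at the true location 10.0, plus six
# outliers (60% of the data) that are scattered "randomly enough".
good = [10.0] * 4
outliers = [200.0, -300.0, 555.0, -40.0, 1234.0, 77.0]

print(lp_location(good + outliers, p=0.5))  # 10.0: the perfect estimate
```

Because the good observations are exact and the outliers do not cluster, the Lp estimate lands exactly on 10.0 despite a 60% contamination rate, illustrating the strict super robustness the paper goes on to prove.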